Constructive Galois Connections: Taming the Galois Connection Framework for Mechanized Metatheory
Galois connections are a foundational tool for structuring abstraction in
semantics and their use lies at the heart of the theory of abstract
interpretation. Yet, mechanization of Galois connections remains limited to
restricted modes of use, preventing their general application in mechanized
metatheory and certified programming.
This paper presents constructive Galois connections, a variant of Galois
connections that is effective both on paper and in proof assistants; is
complete with respect to a large subset of classical Galois connections; and
enables more general reasoning principles, including the "calculational" style
advocated by Cousot.
To design constructive Galois connections we identify a restricted mode of use
of classical ones which is both general and amenable to mechanization in
dependently-typed functional programming languages. Crucial to our metatheory
is the addition of monadic structure to Galois connections to control a
"specification effect". Effectful calculations may reason classically, while
pure calculations have extractable computational content. Explicitly moving
between the worlds of specification and implementation is enabled by our
metatheory.
To validate our approach, we provide two case studies in mechanizing existing
proofs from the literature: one uses calculational abstract interpretation to
design a static analyzer, the other forms a semantic basis for gradual typing.
Both mechanized proofs closely follow their original paper-and-pencil
counterparts, employ reasoning principles not captured by previous
mechanization approaches, support the extraction of verified algorithms, and
are novel.
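The adjunction underlying the abstract can be made concrete with a toy example (our own illustration in Python, not the paper's mechanized development): a Galois connection between sets of integers and a five-point sign lattice, with the defining law α(S) ⊑ a ⇔ S ⊆ γ(a) checked exhaustively over a small universe.

```python
from itertools import product

# Sign lattice: BOT <= NEG, ZERO, POS <= TOP.
BOT, NEG, ZERO, POS, TOP = "bot", "neg", "zero", "pos", "top"
ORDER = {(BOT, a) for a in (NEG, ZERO, POS, TOP)} | {(a, TOP) for a in (NEG, ZERO, POS)}

def leq(a, b):
    return a == b or (a, b) in ORDER

def join(a, b):
    return b if leq(a, b) else a if leq(b, a) else TOP

def sign(n):
    return NEG if n < 0 else POS if n > 0 else ZERO

def alpha(s):
    """Best abstraction of a set of ints: the join of the elements' signs."""
    a = BOT
    for n in s:
        a = join(a, sign(n))
    return a

def in_gamma(a, n):
    """Membership test for the (possibly infinite) concretization gamma(a)."""
    return a == TOP or (a != BOT and sign(n) == a)

# The defining adjunction: alpha(S) <= a  iff  S is a subset of gamma(a).
universe = range(-3, 4)
for bits in product([0, 1], repeat=len(universe)):
    S = {n for n, b in zip(universe, bits) if b}
    for a in (BOT, NEG, ZERO, POS, TOP):
        assert leq(alpha(S), a) == all(in_gamma(a, n) for n in S)
print("adjunction verified on all subsets of", list(universe))
```

Representing γ as a membership predicate rather than a set is what keeps the (infinite) concrete side executable; the paper's constructive variant addresses the same tension in a principled way.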
Mechanically Verified Calculational Abstract Interpretation
Calculational abstract interpretation, long advocated by Cousot, is a
technique for deriving correct-by-construction abstract interpreters from the
formal semantics of programming languages.
This paper addresses the problem of deriving correct-by-verified-construction
abstract interpreters with the use of a proof assistant. We identify several
technical challenges to overcome with the aim of supporting verified
calculational abstract interpretation that is faithful to existing
pencil-and-paper proofs, supports calculation with Galois connections
generally, and enables the extraction of verified static analyzers from these
proofs. To meet these challenges, we develop a theory of Galois connections in
monadic style that include a specification effect. Effectful calculations may
reason classically, while pure calculations have extractable computational
content. Moving between the worlds of specification and implementation is
enabled by our metatheory.
To validate our approach, we give the first mechanically verified proof of
correctness for Cousot's "Calculational design of a generic abstract
interpreter." Our proof "by calculus" closely follows the original
paper-and-pencil proof and supports the extraction of a verified static
analyzer.
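As a hedged illustration of the calculational style (a toy in Python under our own naming, not Cousot's generic interpreter or the paper's mechanization): the best abstract transformer for a concrete operation f is α ∘ f ∘ γ, and calculating it for addition over a sign domain "reads off" a direct, executable definition whose soundness can then be sampled.

```python
NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

def sign(n):
    return NEG if n < 0 else POS if n > 0 else ZERO

def leq(a, b):                      # the (tiny) abstract ordering
    return a == b or b == TOP

# Abstract addition as the calculation reads it off: plus_sharp(a, b)
# over-approximates { x + y | x in gamma(a), y in gamma(b) }.
PLUS = {
    (NEG, NEG): NEG, (POS, POS): POS, (ZERO, ZERO): ZERO,
    (NEG, ZERO): NEG, (ZERO, NEG): NEG,
    (POS, ZERO): POS, (ZERO, POS): POS,
}

def plus_sharp(a, b):
    if TOP in (a, b):
        return TOP
    return PLUS.get((a, b), TOP)    # e.g. neg + pos may be anything

# Sampled soundness check: sign(x + y) <= plus_sharp(sign(x), sign(y)).
assert all(leq(sign(x + y), plus_sharp(sign(x), sign(y)))
           for x in range(-5, 6) for y in range(-5, 6))
print("plus_sharp is sound on samples")
```

In the calculational discipline, each entry of the table above is justified by an equational derivation rather than checked after the fact; the sampled assertion here only stands in for that proof.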
A family of abstract interpretations for static analysis of concurrent higher-order programs
We develop a framework for computing two foundational analyses for concurrent
higher-order programs: (control-)flow analysis (CFA) and may-happen-in-parallel
analysis (MHP). We pay special attention to the unique challenges posed by the
unrestricted mixture of first-class continuations and dynamically spawned
threads. To set the stage, we formulate a concrete model of concurrent
higher-order programs: the P(CEK*)S machine. We find that the systematic
abstract interpretation of this machine is capable of computing both flow and
MHP analyses. Yet, a closer examination finds that the precision for MHP is
poor. As a remedy, we adapt a shape-analytic technique, singleton abstraction,
to dynamically spawned threads (as opposed to objects in the heap). We then show
that if MHP analysis is not of interest, we can substantially accelerate the
computation of flow analysis alone by collapsing thread interleavings with a
second layer of abstraction.
Comment: The 18th International Static Analysis Symposium (SAS 2011).
Abstracting Abstract Machines: A Systematic Approach to Higher-Order Program Analysis
Predictive models are fundamental to engineering reliable software systems.
However, designing conservative, computable approximations for the behavior of
programs (static analyses) remains a difficult and error-prone process for
modern high-level programming languages. What analysis designers need is a
principled method for navigating the gap between semantics and analytic models:
analysis designers need a method that tames the interaction of complex
language features such as higher-order functions, recursion, exceptions,
continuations, objects and dynamic allocation.
We contribute a systematic approach to program analysis that yields novel and
transparently sound static analyses. Our approach relies on existing
derivational techniques to transform high-level language semantics into
low-level deterministic state-transition systems (with potentially infinite
state spaces). We then perform a series of simple machine refactorings to
obtain a sound, computable approximation, which takes the form of a
non-deterministic state-transition system with a finite state space. The
approach scales up uniformly to enable program analysis of realistic language
features, including higher-order functions, tail calls, conditionals, side
effects, exceptions, first-class continuations, and even garbage collection.
Comment: Communications of the ACM, Research Highlight.
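The recipe this abstract describes can be sketched on the untyped lambda calculus (a minimal toy under our own naming, not the paper's artifact): a CEK-style machine whose continuations are store-allocated, made finite by a monovariant allocator, and explored to a fixpoint over a widened global store.

```python
from collections import defaultdict

# Terms: ("var", x) | ("lam", x, body) | ("app", f, e)
store = defaultdict(set)          # global, widened store: addr -> value set

def step(state):
    if state[0] == "ev":
        _, e, env, ka = state
        rho = dict(env)
        if e[0] == "var":
            for v in store[rho[e[1]]]:
                yield ("co", v, ka)
        elif e[0] == "lam":
            yield ("co", ("clo", e, env), ka)
        else:                                  # ("app", f, arg)
            _, f, arg = e
            ka2 = ("kont", f)                  # finite kont addresses
            store[ka2].add(("ar", arg, env, ka))
            yield ("ev", f, env, ka2)
    else:                                      # ("co", v, ka)
        _, v, ka = state
        for k in list(store[ka]):
            if k[0] == "ar":                   # operator done; eval argument
                _, arg, env, ka1 = k
                ka2 = ("kont", arg)
                store[ka2].add(("fn", v, ka1))
                yield ("ev", arg, env, ka2)
            else:                              # ("fn", clo, ka1): beta step
                _, clo, ka1 = k
                _, (_, x, body), cenv = clo
                addr = ("bind", x)             # monovariant: one addr per var
                store[addr].add(v)
                rho = dict(cenv); rho[x] = addr
                yield ("ev", body, frozenset(rho.items()), ka1)

def analyze(program):
    init = ("ev", program, frozenset(), ("kont", "halt"))
    while True:                                # iterate to a store fixpoint
        before = {a: frozenset(vs) for a, vs in store.items()}
        seen, todo = set(), [init]
        while todo:
            s = todo.pop()
            if s not in seen:
                seen.add(s)
                todo.extend(step(s))
        if {a: frozenset(vs) for a, vs in store.items()} == before:
            return seen

ID = ("lam", "y", ("var", "y"))
PROG = ("app", ("lam", "x", ("var", "x")), ID)
analyze(PROG)
print(sorted(a for a in store if a[0] == "bind"))   # one binding: x
```

Because addresses are drawn from the finite set of variables and subterms, the state space is finite and the exploration terminates; refining the allocator recovers more precise (e.g. context-sensitive) analyses without touching the machine.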
Deciding kCFA is complete for EXPTIME
We give an exact characterization of the computational complexity of the
kCFA hierarchy. For any k > 0, we prove that the control flow decision
problem is complete for deterministic exponential time. This theorem validates
empirical observations that such control flow analysis is intractable. It also
provides more general insight into the complexity of abstract interpretation.
Comment: Appeared in The 13th ACM SIGPLAN International Conference on
Functional Programming (ICFP'08), Victoria, British Columbia, Canada,
September 2008.
Flow analysis, linearity, and PTIME
Flow analysis is a ubiquitous and much-studied component of compiler
technology---and its variations abound. Amongst the most well known is Shivers'
0CFA; however, the best known algorithm for 0CFA requires time cubic in the
size of the analyzed program and is unlikely to be improved. Consequently,
several analyses have been designed to approximate 0CFA by trading precision
for faster computation. Henglein's simple closure analysis, for example,
forfeits the notion of directionality in flows and enjoys an "almost linear"
time algorithm. But in making trade-offs between precision and complexity, what
has been given up and what has been gained? Where do these analyses differ and
where do they coincide?
We identify a core language---the linear λ-calculus---where 0CFA,
simple closure analysis, and many other known approximations or restrictions to
0CFA are rendered identical. Moreover, for this core language, analysis
corresponds with (instrumented) evaluation. Because analysis faithfully
captures evaluation, and because the linear λ-calculus is complete for
PTIME, we derive PTIME-completeness results for all of these analyses.
Comment: Appears in The 15th International Static Analysis Symposium (SAS
2008), Valencia, Spain, July 2008.
Semantic Solutions to Program Analysis Problems
Problems in program analysis can be solved by developing novel program
semantics and deriving abstractions conventionally. For over thirty years,
higher-order program analysis has been sold as a hard problem. Its solutions
have required ingenuity and complex models of approximation. We claim that this
difficulty is due to premature focus on abstraction and propose a new approach
that emphasizes semantics. Its simplicity enables new analyses that are beyond
the current state of the art.
Abstracting Abstract Control (Extended)
The strength of a dynamic language is also its weakness: run-time flexibility
comes at the cost of compile-time predictability. Many of the hallmarks of
dynamic languages such as closures, continuations, various forms of reflection,
and a lack of static types make many programmers rejoice, while compiler
writers, tool developers, and verification engineers lament. The dynamism of
these features simply confounds static reasoning about programs that use
them. Consequently, static analyses for dynamic languages are few, far between,
and seldom sound.
The "abstracting abstract machines" (AAM) approach to constructing static
analyses has recently been proposed as a method to ameliorate the difficulty of
designing analyses for such language features. The approach, so called because
it derives a function for the sound and computable approximation of program
behavior starting from the abstract machine semantics of a language, provides a
viable approach to dynamic language analysis since all that is required is a
machine description of the interpreter.
The original AAM recipe produces finite state abstractions, which cannot
faithfully represent an interpreter's control stack. Recent advances have shown
that higher-order programs can be approximated with pushdown systems. However,
these automata-theoretic models break down on features that inspect or
modify the control stack.
In this paper, we tackle the problem of bringing pushdown flow analysis to
the domain of dynamic language features. We revise the abstracting abstract
machines technique to target the stronger computational model of pushdown
systems. In place of automata theory, we use only abstract machines and
memoization. As case studies, we show the technique applies to a language with
closures, garbage collection, stack-inspection, and first-class composable
continuations.
Comment: To appear at DLS '14.
Stack-Summarizing Control-Flow Analysis of Higher-Order Programs
Two sinks drain precision from higher-order flow analyses: (1) merging of
argument values upon procedure call and (2) merging of return values upon
procedure return. To combat the loss of precision, these two sinks have been
addressed independently. In the case of procedure calls, abstract garbage
collection reduces argument merging; while in the case of procedure returns,
context-free approaches eliminate return value merging. It is natural to expect
that a combined analysis could enjoy the mutually beneficial interaction between
the two approaches. The central contribution of this work is a direct product of
abstract garbage collection with context-free analysis. The central challenge
to overcome is the conflict between the core constraint of a pushdown system
and the needs of garbage collection: a pushdown system can only see the top of
the stack, yet garbage collection needs to see the entire stack during a
collection. To make the direct product computable, we develop "stack
summaries," a method for tracking stack properties at each control state in a
pushdown analysis of higher-order programs.